
 adversarial training








Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

Neural Information Processing Systems

However, SSAT suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead of increasing. We name these abnormal adversarial examples (AAEs).
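The defining property of an AAE can be illustrated with a minimal sketch: flag a training sample as abnormal when its loss *decreases* after the single-step inner maximization. This toy uses a linear logistic classifier with an FGSM-style step, which is an assumption for illustration only; the paper studies SSAT-trained deep networks, where such examples actually arise.

```python
import numpy as np

def logistic_loss(w, x, y):
    # y in {-1, +1}; standard logistic loss for a linear classifier
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm_step(w, x, y, eps):
    # Single-step inner maximization: move x along the sign of the input gradient
    grad_x = -y * w / (1.0 + np.exp(y * np.dot(w, x)))
    return x + eps * np.sign(grad_x)

def is_abnormal(w, x, y, eps):
    # An "abnormal adversarial example" (AAE): the loss after the inner
    # maximization step is LOWER than the clean loss, not higher
    x_adv = fgsm_step(w, x, y, eps)
    return logistic_loss(w, x_adv, y) < logistic_loss(w, x, y)
```

For a convex linear model the FGSM step always increases the loss, so `is_abnormal` returns `False`; per the abstract, AAEs appear only once the trained network has become distorted.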




Learning Provably Robust Estimators for Inverse Problems via Jittering

Neural Information Processing Systems

Deep neural networks provide excellent performance for inverse problems such as denoising. However, neural networks can be sensitive to adversarial or worst-case perturbations. This raises the question of whether such networks can be trained efficiently to be worst-case robust.
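The jittering idea can be sketched in a toy setting: inject extra Gaussian noise into the training inputs of a denoiser, which biases the learned estimator toward heavier shrinkage. This sketch assumes a scalar linear denoiser fit by least squares, purely for illustration; the paper's setting with deep networks and worst-case guarantees is not reproduced here.

```python
import numpy as np

def fit_linear_denoiser(x, y):
    # Least-squares fit of a scalar gain a minimizing mean((a*y - x)^2)
    return float(np.sum(x * y) / np.sum(y * y))

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 1.0, n)        # clean signal, unit variance
y = x + rng.normal(0.0, 0.1, n)    # noisy observations

# Standard training: fit on the noisy observations as-is
a_plain = fit_linear_denoiser(x, y)

# Jittering: add extra Gaussian noise to the training inputs
jitter = rng.normal(0.0, 0.5, n)
a_jittered = fit_linear_denoiser(x, y + jitter)
```

In this toy case the jittered gain is strictly smaller (roughly 1/1.26 versus 1/1.01), i.e. the estimator shrinks its input more aggressively, which is the mechanism by which jittering trades a little nominal accuracy for robustness to perturbations.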